Going forward, AI algorithms will be incorporated into more and more everyday applications. For example, you might want to include an image classifier in a smartphone app. To do this, you'd use a deep learning model trained on hundreds of thousands of images as part of the overall application architecture. A large part of software development in the future will be using these types of models as common parts of applications.
In this project, you'll train an image classifier to recognize different species of flowers. You can imagine using something like this in a phone app that tells you the name of the flower your camera is looking at. In practice you'd train this classifier, then export it for use in your application. We'll be using the Oxford Flowers dataset of 102 flower categories; you can see a few examples below.

The project is broken down into multiple steps:

- Load the image dataset and create a pipeline.
- Build and train an image classifier on this dataset.
- Use your trained model to perform inference on flower images.

We'll lead you through each part, which you'll implement in Python.
When you've completed this project, you'll have an application that can be trained on any set of labeled images. Here, your network will learn about flowers and end up in a command-line application. But what you do with your new skills depends on your imagination and effort in building a dataset. For example, imagine an app where you take a picture of a car and it tells you the make and model, then looks up information about it. Go build your own dataset and make something new.
To ensure we can download the latest version of the oxford_flowers102 dataset, let's first install both tensorflow-datasets and tfds-nightly.
- tensorflow-datasets is the stable version, released on a cadence of every few months.
- tfds-nightly is released every day and has the latest versions of the datasets.

We'll also upgrade TensorFlow to ensure we have a version that is compatible with the latest version of the dataset.
#%pip --no-cache-dir install tensorflow-datasets --user
#%pip --no-cache-dir install tfds-nightly --user
#%pip --no-cache-dir install --upgrade tensorflow --user
After the above installations have finished, be sure to restart the kernel. You can do this by going to Kernel > Restart.
# Import TensorFlow
import tensorflow as tf
import tensorflow_datasets as tfds
import tensorflow_hub as hub
# Ignore some warnings that are not relevant (you can remove this if you prefer)
import warnings
warnings.filterwarnings('ignore')
# TODO: Make all other necessary imports.
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
from PIL import Image
import os
import glob
import time
import json
import logging
logger = tf.get_logger()
logger.setLevel(logging.ERROR)
print('Using:')
print('\t\u2022 TensorFlow version:', tf.__version__)
print('\t\u2022 tf.keras version:', tf.keras.__version__)
print('\t\u2022 Running on GPU' if tf.test.is_gpu_available() else '\t\u2022 GPU device not found. Running on CPU')
Using:
	• TensorFlow version: 2.9.1
	• tf.keras version: 2.9.0
	• GPU device not found. Running on CPU
# Some other recommended settings:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
tfds.disable_progress_bar()
Here you'll use tensorflow_datasets to load the Oxford Flowers 102 dataset. This dataset has 3 splits: 'train', 'test', and 'validation'. You'll also need to make sure the training data is normalized and resized to 224x224 pixels as required by the pre-trained networks.
The validation and testing sets are used to measure the model's performance on data it hasn't seen yet, but you'll still need to normalize and resize the images to the appropriate size.
# TODO: Load the dataset with TensorFlow Datasets. Hint: use tfds.load()
data_source_name = 'oxford_flowers102'
dataset, dataset_info = tfds.load(data_source_name, as_supervised=True, with_info=True)
# TODO: Create a training set, a validation set and a test set.
training_set, test_set, validation_set = dataset['train'], dataset['test'], dataset['validation']
# Display the dataset_info
#dataset_info
# TODO: Get the number of examples in each set from the dataset info.
nb_training_samples = dataset_info.splits['train'].num_examples
nb_test_samples = dataset_info.splits['test'].num_examples
nb_validation_samples = dataset_info.splits['validation'].num_examples
f"number of training images: {nb_training_samples}, number of test images: {nb_test_samples}, number of validation images: {nb_validation_samples}"
'number of training images: 1020, number of test images: 6149, number of validation images: 1020'
# TODO: Get the number of classes in the dataset from the dataset info.
nb_classes = dataset_info.features['label'].num_classes
f'number of classes: {nb_classes}'
'number of classes: 102'
# TODO: Print the shape and corresponding label of 3 images in the training set.
for image, label in training_set.take(3):
    print("shape: {} label {}".format(image.shape, label.numpy()))
shape: (500, 667, 3) label 72
shape: (500, 666, 3) label 84
shape: (670, 500, 3) label 70
# TODO: Plot 1 image from the training set.
# Set the title of the plot to the corresponding image label.
for image, label in training_set.take(1):
    image = image.numpy()
    plt.imshow(image, cmap=plt.cm.binary)
    plt.colorbar()
    plt.title("Image Label: {}".format(label.numpy()))
    plt.show()
You'll also need to load in a mapping from label to category name. You can find this in the file label_map.json. It's a JSON object which you can read in with the json module. This will give you a dictionary mapping the integer coded labels to the actual names of the flowers.
with open('label_map.json', 'r') as f:
class_names = json.load(f)
#class_names
class_names["72"]
'azalea'
# TODO: Plot 1 image from the training set. Set the title
# of the plot to the corresponding class name.
for image, label in training_set.take(1):
    image = image.numpy()
    plt.imshow(image, cmap=plt.cm.binary)
    plt.colorbar()
    plt.title("Image Label: {}".format(class_names[str(label.numpy())]))
    plt.show()
# TODO: Create a pipeline for each set.
image_size = 224

def preprocess_image(image, label):
    # Resize to the input size expected by the pre-trained network
    image = tf.image.resize(image, (image_size, image_size))
    # Scale pixel values from [0, 255] to [0, 1]
    image /= 255.0
    return image, label

batch_size = 64
training_batches = training_set.cache().shuffle(nb_training_samples // 4).map(preprocess_image).batch(batch_size).prefetch(1)
validation_batches = validation_set.cache().map(preprocess_image).batch(batch_size).prefetch(1)
test_batches = test_set.cache().map(preprocess_image).batch(batch_size).prefetch(1)
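As a quick sanity check (optional, not required by the project), you can pull a single batch from the pipeline and confirm it has the shape and pixel range the pre-trained network expects:
# Optional sanity check: one batch should have shape (batch_size, 224, 224, 3)
# with float pixel values in the range [0, 1].
for image_batch, label_batch in training_batches.take(1):
    print('image batch shape:', image_batch.shape)
    print('label batch shape:', label_batch.shape)
    print('pixel range: {:.3f} to {:.3f}'.format(
        float(tf.reduce_min(image_batch)), float(tf.reduce_max(image_batch))))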
Now that the data is ready, it's time to build and train the classifier. You should use the MobileNet pre-trained model from TensorFlow Hub to get the image features. Build and train a new feed-forward classifier using those features.
We're going to leave this part up to you. If you want to talk through it with someone, chat with your fellow students!
Refer to the rubric for guidance on successfully completing this section. Things you'll need to do:

- Load the MobileNet pre-trained network from TensorFlow Hub.
- Define a new, untrained feed-forward network as a classifier.
- Train the classifier.
- Plot the loss and accuracy values achieved during training for the training and validation set.
- Save your trained model as a Keras model.
We've left a cell open for you below, but use as many as you need. Our advice is to break the problem up into smaller parts you can run separately. Check that each part is doing what you expect, then move on to the next. You'll likely find that as you work through each part, you'll need to go back and modify your previous code. This is totally normal!
When training, make sure you're updating only the weights of the feed-forward network. You should be able to get the validation accuracy above 70% if you build everything right.
Note for Workspace users: One important tip if you're using the workspace to run your code: to avoid having your workspace disconnect during the long-running tasks in this notebook, please read the earlier page in this lesson called Intro to GPU Workspaces about Keeping Your Session Active. You'll want to include code from the workspace_utils.py module. Also, if your model is over 1 GB when saved as a checkpoint, there might be issues with saving backups in your workspace. If your saved checkpoint is larger than 1 GB (you can open a terminal and check with ls -lh), you should reduce the size of your hidden layers and train again.
# TODO: Build and train your network.

# Set the image input shape
input_shape = (image_size, image_size, 3)

# Download the pre-trained MobileNet model without the final classification layer
URL = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4"
feature_extractor = hub.KerasLayer(URL, input_shape=input_shape)

# Freeze the weights and biases of the pre-trained model
feature_extractor.trainable = False

# Build the model: frozen feature extractor followed by a new softmax classification layer
model = tf.keras.Sequential([
    feature_extractor,
    tf.keras.layers.Dense(nb_classes, activation='softmax')
])
#model.summary()
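If your saved checkpoint turns out too large, or you want a bit more capacity in the new head, one variation is to insert a hidden layer with dropout before the softmax. A minimal sketch (the layer sizes here are illustrative assumptions, not values from the rubric):
# Sketch of an alternative classifier head; 256 units and 0.2 dropout are
# arbitrary choices. Smaller hidden layers also shrink the saved checkpoint.
alt_model = tf.keras.Sequential([
    feature_extractor,                                # frozen 1280-d MobileNet features
    tf.keras.layers.Dense(256, activation='relu'),    # new trainable hidden layer
    tf.keras.layers.Dropout(0.2),                     # regularize the new head
    tf.keras.layers.Dense(nb_classes, activation='softmax')
])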
# Check if a GPU is available
f'Is there a GPU Available: {tf.test.is_gpu_available()}'
'Is there a GPU Available: False'
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

EPOCHS = 10

# Stop training when there is no improvement in the validation loss for 5 consecutive epochs
early_stopping = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=5)

history = model.fit(training_batches,
                    epochs=EPOCHS,
                    validation_data=validation_batches,
                    callbacks=[early_stopping])
Epoch 1/10
16/16 [==============================] - 61s 4s/step - loss: 4.5092 - accuracy: 0.0578 - val_loss: 3.6274 - val_accuracy: 0.2382
Epoch 2/10
16/16 [==============================] - 50s 3s/step - loss: 2.7886 - accuracy: 0.5373 - val_loss: 2.6099 - val_accuracy: 0.5520
Epoch 3/10
16/16 [==============================] - 51s 3s/step - loss: 1.7591 - accuracy: 0.8206 - val_loss: 1.9966 - val_accuracy: 0.6706
Epoch 4/10
16/16 [==============================] - 50s 3s/step - loss: 1.1331 - accuracy: 0.9206 - val_loss: 1.6403 - val_accuracy: 0.7216
Epoch 5/10
16/16 [==============================] - 51s 3s/step - loss: 0.7931 - accuracy: 0.9588 - val_loss: 1.4289 - val_accuracy: 0.7441
Epoch 6/10
16/16 [==============================] - 51s 3s/step - loss: 0.5788 - accuracy: 0.9794 - val_loss: 1.2846 - val_accuracy: 0.7510
Epoch 7/10
16/16 [==============================] - 58s 4s/step - loss: 0.4464 - accuracy: 0.9814 - val_loss: 1.1918 - val_accuracy: 0.7667
Epoch 8/10
16/16 [==============================] - 75s 5s/step - loss: 0.3497 - accuracy: 0.9941 - val_loss: 1.1205 - val_accuracy: 0.7765
Epoch 9/10
16/16 [==============================] - 56s 4s/step - loss: 0.2842 - accuracy: 0.9971 - val_loss: 1.0654 - val_accuracy: 0.7863
Epoch 10/10
16/16 [==============================] - 53s 3s/step - loss: 0.2340 - accuracy: 0.9980 - val_loss: 1.0215 - val_accuracy: 0.7902
# Check that history.history is a dictionary
print('history.history has type:', type(history.history))
# Print the keys of the history.history dictionary
print('\nThe keys of history.history are:', list(history.history.keys()))
history.history has type: <class 'dict'>

The keys of history.history are: ['loss', 'accuracy', 'val_loss', 'val_accuracy']
# TODO: Plot the loss and accuracy values achieved during training for the training and validation set.
training_accuracy = history.history['accuracy']
validation_accuracy = history.history['val_accuracy']
training_loss = history.history['loss']
validation_loss = history.history['val_loss']
epochs_range = range(EPOCHS)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, training_accuracy, label='Training Accuracy')
plt.plot(epochs_range, validation_accuracy, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, training_loss, label='Training Loss')
plt.plot(epochs_range, validation_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
It's good practice to test your trained network on test data, images the network has never seen either in training or validation. This will give you a good estimate for the model's performance on completely new images. You should be able to reach around 70% accuracy on the test set if the model has been trained well.
# TODO: Print the loss and accuracy values achieved on the entire test set.
loss, accuracy = model.evaluate(test_batches)
f'Test set loss: {loss}, test accuracy: {accuracy}'
97/97 [==============================] - 184s 2s/step - loss: 1.1296 - accuracy: 0.7657
'Test set loss: 1.1295673847198486, test accuracy: 0.7656529545783997'
Now that your network is trained, save the model so you can load it later for inference. In the cell below, save your model as a Keras model (i.e. save it as an HDF5 file).
# TODO: Save your trained model as a Keras model.
t = time.time()
model_path = "./{}.h5".format(int(t))
model.save(model_path)
print(model_path)
./1660747490.h5
Load the Keras model you saved above.
# TODO: Load the Keras model
reloaded = tf.keras.models.load_model(
    model_path,
    # `custom_objects` tells Keras how to load a `hub.KerasLayer`
    custom_objects={'KerasLayer': hub.KerasLayer})
reloaded.summary()
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
keras_layer (KerasLayer) (None, 1280) 2257984
dense (Dense) (None, 102) 130662
=================================================================
Total params: 2,388,646
Trainable params: 130,662
Non-trainable params: 2,257,984
_________________________________________________________________
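As an optional check (not part of the original instructions), you can evaluate the reloaded model on the test set; the loss and accuracy should match the earlier evaluation up to floating-point noise:
# Optional: confirm the reloaded model behaves like the original
reloaded_loss, reloaded_accuracy = reloaded.evaluate(test_batches)
print(f'Reloaded model loss: {reloaded_loss}, accuracy: {reloaded_accuracy}')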
Now you'll write a function that uses your trained network for inference. Write a function called predict that takes an image path, a model, and an integer $K$, and returns the top $K$ most likely class labels along with their probabilities. The function call should look like:
probs, classes = predict(image_path, model, top_k)
If top_k=5 the output of the predict function should be something like this:
probs, classes = predict(image_path, model, 5)
print(probs)
print(classes)
> [ 0.01558163 0.01541934 0.01452626 0.01443549 0.01407339]
> ['70', '3', '45', '62', '55']
Your predict function should use PIL to load the image from the given image_path. You can use the Image.open function to load the images. The Image.open() function returns an Image object. You can convert this Image object to a NumPy array by using the np.asarray() function.
The predict function will also need to handle pre-processing the input image such that it can be used by your model. We recommend you write a separate function called process_image that performs the pre-processing. You can then call the process_image function from the predict function.
The process_image function should take in an image (in the form of a NumPy array) and return an image in the form of a NumPy array with shape (224, 224, 3).
First, you should convert your image into a TensorFlow Tensor and then resize it to the appropriate size using tf.image.resize.
Second, the pixel values of the input images are typically encoded as integers in the range 0-255, but the model expects the pixel values to be floats in the range 0-1. Therefore, you'll also need to normalize the pixel values.
Finally, convert your image back to a NumPy array using the .numpy() method.
# TODO: Create the process_image function
def process_image(image):
    # Convert the NumPy array to a TensorFlow tensor and resize it
    image = tf.convert_to_tensor(image)
    image = tf.image.resize(image, (image_size, image_size))
    # Normalize the pixel values to the range [0, 1]
    image /= 255.0
    # Return a NumPy array with shape (224, 224, 3)
    return image.numpy()
To check your process_image function we have provided 4 images in the ./test_images/ folder:

- cautleya_spicata.jpg
- hard-leaved_pocket_orchid.jpg
- orange_dahlia.jpg
- wild_pansy.jpg
The code below loads one of the above images using PIL and plots the original image alongside the image produced by your process_image function. If your process_image function works, the plotted image should be the correct size.
from PIL import Image
image_path = './test_images/hard-leaved_pocket_orchid.jpg'
im = Image.open(image_path)
test_image = np.asarray(im)
processed_test_image = process_image(test_image)
fig, (ax1, ax2) = plt.subplots(figsize=(10,10), ncols=2)
ax1.imshow(test_image)
ax1.set_title('Original Image')
ax2.imshow(processed_test_image)
ax2.set_title('Processed Image')
plt.tight_layout()
plt.show()
Once you can get images in the correct format, it's time to write the predict function for making inference with your model.
Remember, the predict function should take an image path, a model, and an integer $K$, and return the top $K$ most likely class labels along with their probabilities. The function call should look like:
probs, classes = predict(image_path, model, top_k)
If top_k=5 the output of the predict function should be something like this:
probs, classes = predict(image_path, model, 5)
print(probs)
print(classes)
> [ 0.01558163 0.01541934 0.01452626 0.01443549 0.01407339]
> ['70', '3', '45', '62', '55']
Your predict function should use PIL to load the image from the given image_path. You can use the Image.open function to load the images. The Image.open() function returns an Image object. You can convert this Image object to a NumPy array by using the np.asarray() function.
Note: The image returned by the process_image function is a NumPy array with shape (224, 224, 3) but the model expects the input images to be of shape (1, 224, 224, 3). This extra dimension represents the batch size. We suggest you use the np.expand_dims() function to add the extra dimension.
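For instance, a minimal illustration of the shape change (the array here is a dummy, just to show the added batch dimension):
# np.expand_dims adds the leading batch dimension the model expects
dummy = np.zeros((224, 224, 3))
print(np.expand_dims(dummy, axis=0).shape)  # prints (1, 224, 224, 3)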
# TODO: Create the predict function
def predict(image_path, model, top_k):
    # Load the image with PIL and convert it to a NumPy array
    image = Image.open(image_path)
    image_array = np.asarray(image)
    # Pre-process the image (resize and normalize)
    image_processed = process_image(image_array)
    # Add the batch dimension: (224, 224, 3) -> (1, 224, 224, 3)
    image_batch = np.expand_dims(image_processed, axis=0)
    # Make the prediction
    probs = model.predict(image_batch)
    # Find the values and indices of the k top probabilities
    top_k_values, top_k_indices = tf.math.top_k(probs, top_k)
    # Map indices to flower names; the +1 assumes label_map.json keys are
    # 1-based while the model outputs 0-based indices
    top_k_classes = [class_names[str(index + 1)] for index in top_k_indices.numpy()[0]]
    return top_k_values.numpy()[0], top_k_classes
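As a quick spot check (optional; the path below assumes one of the test images described in the next section), you can call predict directly:
# Spot check on one of the provided test images
probs, classes = predict('./test_images/wild_pansy.jpg', reloaded, 5)
print(probs)
print(classes)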
It's always good to check the predictions made by your model to make sure they are correct. To check your predictions we have provided 4 images in the ./test_images/ folder:

- cautleya_spicata.jpg
- hard-leaved_pocket_orchid.jpg
- orange_dahlia.jpg
- wild_pansy.jpg
In the cell below use matplotlib to plot the input image alongside the probabilities for the top 5 classes predicted by your model. Plot the probabilities as a bar graph. The plot should look like this:

You can convert from the class integer labels to actual flower names using class_names.
# TODO: Plot the input image along with the top 5 classes
test_image_path = './test_images'
search_criteria = "*.jpg"
test_images_folder = os.path.join(test_image_path, search_criteria)
print(test_images_folder)

# Get all of the image files
images_files = glob.glob(test_images_folder)
print(images_files)
./test_images/*.jpg
['./test_images/wild_pansy.jpg', './test_images/orange_dahlia.jpg', './test_images/hard-leaved_pocket_orchid.jpg', './test_images/cautleya_spicata.jpg']
def check_sanity(img_path):
    fig, (ax1, ax2) = plt.subplots(figsize=(10, 5), ncols=2)
    # Display the original input image
    image = mpimg.imread(img_path)
    ax1.imshow(image)
    # Predict the top 5 classes and plot them as a horizontal bar chart
    probs, classes = predict(img_path, reloaded, 5)
    ax2.barh(classes[::-1], probs[::-1])
    plt.tight_layout()
    plt.show()

for file in images_files:
    check_sanity(file)
1/1 [==============================] - 1s 531ms/step
1/1 [==============================] - 0s 42ms/step
1/1 [==============================] - 0s 40ms/step
1/1 [==============================] - 0s 40ms/step
# TODO: Build and train your network.

# Set the image input shape
input_shape = (image_size, image_size, 3)

# Download the pre-trained MobileNet model without the final classification layer
URL = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4"
feature_extractor = hub.KerasLayer(URL, input_shape=input_shape)

# Freeze the weights and biases of the pre-trained model
feature_extractor.trainable = False

# Build the model
model1 = tf.keras.Sequential([
    feature_extractor,
    tf.keras.layers.Dense(nb_classes, activation='softmax')
])
#model1.summary()

model1.compile(optimizer='adam',
               loss='sparse_categorical_crossentropy',
               metrics=['accuracy'])

EPOCHS = 100

# Stop training when there is no improvement in the validation loss for 5 consecutive epochs
early_stopping = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=5)

# Save the model with the lowest validation loss
save_best = tf.keras.callbacks.ModelCheckpoint('./best_model.h5',
                                               monitor='val_loss',
                                               save_best_only=True)

history = model1.fit(training_batches,
                     epochs=EPOCHS,
                     validation_data=validation_batches,
                     callbacks=[early_stopping, save_best])
Epoch 1/100
16/16 [==============================] - 59s 4s/step - loss: 4.4483 - accuracy: 0.0725 - val_loss: 3.6044 - val_accuracy: 0.2431
Epoch 2/100
16/16 [==============================] - 75s 5s/step - loss: 2.7899 - accuracy: 0.5235 - val_loss: 2.6108 - val_accuracy: 0.5559
Epoch 3/100
16/16 [==============================] - 83s 5s/step - loss: 1.7478 - accuracy: 0.8225 - val_loss: 2.0033 - val_accuracy: 0.6794
Epoch 4/100
16/16 [==============================] - 83s 5s/step - loss: 1.1368 - accuracy: 0.9255 - val_loss: 1.6543 - val_accuracy: 0.7235
Epoch 5/100
16/16 [==============================] - 76s 5s/step - loss: 0.7912 - accuracy: 0.9608 - val_loss: 1.4385 - val_accuracy: 0.7490
Epoch 6/100
16/16 [==============================] - 72s 5s/step - loss: 0.5776 - accuracy: 0.9814 - val_loss: 1.2997 - val_accuracy: 0.7667
Epoch 7/100
16/16 [==============================] - 60s 4s/step - loss: 0.4395 - accuracy: 0.9882 - val_loss: 1.2048 - val_accuracy: 0.7745
Epoch 8/100
16/16 [==============================] - 59s 4s/step - loss: 0.3495 - accuracy: 0.9922 - val_loss: 1.1316 - val_accuracy: 0.7775
Epoch 9/100
16/16 [==============================] - 60s 4s/step - loss: 0.2813 - accuracy: 0.9971 - val_loss: 1.0800 - val_accuracy: 0.7902
Epoch 10/100
16/16 [==============================] - 60s 4s/step - loss: 0.2325 - accuracy: 1.0000 - val_loss: 1.0367 - val_accuracy: 0.7892
Epoch 11/100
16/16 [==============================] - 59s 4s/step - loss: 0.1951 - accuracy: 1.0000 - val_loss: 1.0020 - val_accuracy: 0.7951
Epoch 12/100
16/16 [==============================] - 62s 4s/step - loss: 0.1671 - accuracy: 1.0000 - val_loss: 0.9723 - val_accuracy: 0.7980
Epoch 13/100
16/16 [==============================] - 54s 3s/step - loss: 0.1439 - accuracy: 1.0000 - val_loss: 0.9507 - val_accuracy: 0.7990
Epoch 14/100
16/16 [==============================] - 49s 3s/step - loss: 0.1259 - accuracy: 1.0000 - val_loss: 0.9299 - val_accuracy: 0.8010
Epoch 15/100
16/16 [==============================] - 57s 4s/step - loss: 0.1110 - accuracy: 1.0000 - val_loss: 0.9119 - val_accuracy: 0.8029
Epoch 16/100
16/16 [==============================] - 60s 4s/step - loss: 0.0992 - accuracy: 1.0000 - val_loss: 0.8969 - val_accuracy: 0.8088
Epoch 17/100
16/16 [==============================] - 61s 4s/step - loss: 0.0888 - accuracy: 1.0000 - val_loss: 0.8814 - val_accuracy: 0.8069
Epoch 18/100
16/16 [==============================] - 52s 3s/step - loss: 0.0804 - accuracy: 1.0000 - val_loss: 0.8709 - val_accuracy: 0.8108
Epoch 19/100
16/16 [==============================] - 46s 3s/step - loss: 0.0731 - accuracy: 1.0000 - val_loss: 0.8571 - val_accuracy: 0.8108
Epoch 20/100
16/16 [==============================] - 46s 3s/step - loss: 0.0668 - accuracy: 1.0000 - val_loss: 0.8485 - val_accuracy: 0.8088
Epoch 21/100
16/16 [==============================] - 50s 3s/step - loss: 0.0614 - accuracy: 1.0000 - val_loss: 0.8397 - val_accuracy: 0.8049
Epoch 22/100
16/16 [==============================] - 47s 3s/step - loss: 0.0566 - accuracy: 1.0000 - val_loss: 0.8313 - val_accuracy: 0.8127
Epoch 23/100
16/16 [==============================] - 50s 3s/step - loss: 0.0525 - accuracy: 1.0000 - val_loss: 0.8239 - val_accuracy: 0.8147
Epoch 24/100
16/16 [==============================] - 46s 3s/step - loss: 0.0489 - accuracy: 1.0000 - val_loss: 0.8165 - val_accuracy: 0.8147
Epoch 25/100
16/16 [==============================] - 54s 3s/step - loss: 0.0456 - accuracy: 1.0000 - val_loss: 0.8112 - val_accuracy: 0.8118
Epoch 26/100
16/16 [==============================] - 61s 4s/step - loss: 0.0427 - accuracy: 1.0000 - val_loss: 0.8049 - val_accuracy: 0.8118
Epoch 27/100
16/16 [==============================] - 49s 3s/step - loss: 0.0400 - accuracy: 1.0000 - val_loss: 0.7985 - val_accuracy: 0.8157
Epoch 28/100
16/16 [==============================] - 50s 3s/step - loss: 0.0376 - accuracy: 1.0000 - val_loss: 0.7933 - val_accuracy: 0.8137
Epoch 29/100
16/16 [==============================] - 49s 3s/step - loss: 0.0355 - accuracy: 1.0000 - val_loss: 0.7889 - val_accuracy: 0.8157
Epoch 30/100
16/16 [==============================] - 50s 3s/step - loss: 0.0335 - accuracy: 1.0000 - val_loss: 0.7849 - val_accuracy: 0.8137
Epoch 31/100
16/16 [==============================] - 51s 3s/step - loss: 0.0317 - accuracy: 1.0000 - val_loss: 0.7804 - val_accuracy: 0.8167
Epoch 32/100
16/16 [==============================] - 50s 3s/step - loss: 0.0301 - accuracy: 1.0000 - val_loss: 0.7763 - val_accuracy: 0.8157
Epoch 33/100
16/16 [==============================] - 50s 3s/step - loss: 0.0286 - accuracy: 1.0000 - val_loss: 0.7724 - val_accuracy: 0.8147
Epoch 34/100
16/16 [==============================] - 45s 3s/step - loss: 0.0272 - accuracy: 1.0000 - val_loss: 0.7689 - val_accuracy: 0.8157
Epoch 35/100
16/16 [==============================] - 50s 3s/step - loss: 0.0260 - accuracy: 1.0000 - val_loss: 0.7653 - val_accuracy: 0.8157
Epoch 36/100
16/16 [==============================] - 46s 3s/step - loss: 0.0248 - accuracy: 1.0000 - val_loss: 0.7620 - val_accuracy: 0.8157
Epoch 37/100
16/16 [==============================] - 45s 3s/step - loss: 0.0237 - accuracy: 1.0000 - val_loss: 0.7589 - val_accuracy: 0.8176
Epoch 38/100
16/16 [==============================] - 50s 3s/step - loss: 0.0227 - accuracy: 1.0000 - val_loss: 0.7563 - val_accuracy: 0.8186
Epoch 39/100
16/16 [==============================] - 50s 3s/step - loss: 0.0217 - accuracy: 1.0000 - val_loss: 0.7536 - val_accuracy: 0.8176
Epoch 40/100
16/16 [==============================] - 50s 3s/step - loss: 0.0208 - accuracy: 1.0000 - val_loss: 0.7512 - val_accuracy: 0.8157
Epoch 41/100
16/16 [==============================] - 51s 3s/step - loss: 0.0200 - accuracy: 1.0000 - val_loss: 0.7487 - val_accuracy: 0.8167
Epoch 42/100
16/16 [==============================] - 46s 3s/step - loss: 0.0192 - accuracy: 1.0000 - val_loss: 0.7455 - val_accuracy: 0.8167
Epoch 43/100
16/16 [==============================] - 45s 3s/step - loss: 0.0185 - accuracy: 1.0000 - val_loss: 0.7432 - val_accuracy: 0.8186
Epoch 44/100
16/16 [==============================] - 46s 3s/step - loss: 0.0178 - accuracy: 1.0000 - val_loss: 0.7412 - val_accuracy: 0.8186
Epoch 45/100
16/16 [==============================] - 50s 3s/step - loss: 0.0171 - accuracy: 1.0000 - val_loss: 0.7389 - val_accuracy: 0.8186
Epoch 46/100
16/16 [==============================] - 46s 3s/step - loss: 0.0165 - accuracy: 1.0000 - val_loss: 0.7373 - val_accuracy: 0.8196
Epoch 47/100
16/16 [==============================] - 50s 3s/step - loss: 0.0160 - accuracy: 1.0000 - val_loss: 0.7351 - val_accuracy: 0.8196
Epoch 48/100
16/16 [==============================] - 51s 3s/step - loss: 0.0154 - accuracy: 1.0000 - val_loss: 0.7333 - val_accuracy: 0.8186
Epoch 49/100
16/16 [==============================] - 52s 3s/step - loss: 0.0149 - accuracy: 1.0000 - val_loss: 0.7311 - val_accuracy: 0.8216
Epoch 50/100
16/16 [==============================] - 47s 3s/step - loss: 0.0144 - accuracy: 1.0000 - val_loss: 0.7296 - val_accuracy: 0.8206
Epoch 51/100
16/16 [==============================] - 47s 3s/step - loss: 0.0139 - accuracy: 1.0000 - val_loss: 0.7274 - val_accuracy: 0.8206
Epoch 52/100
16/16 [==============================] - 51s 3s/step - loss: 0.0135 - accuracy: 1.0000 - val_loss: 0.7262 - val_accuracy: 0.8216
Epoch 53/100
16/16 [==============================] - 46s 3s/step - loss: 0.0131 - accuracy: 1.0000 - val_loss: 0.7249 - val_accuracy: 0.8225
Epoch 54/100
16/16 [==============================] - 56s 4s/step - loss: 0.0127 - accuracy: 1.0000 - val_loss: 0.7231 - val_accuracy: 0.8225
Epoch 55/100
16/16 [==============================] - 56s 4s/step - loss: 0.0123 - accuracy: 1.0000 - val_loss: 0.7215 - val_accuracy: 0.8216
Epoch 56/100
16/16 [==============================] - 53s 3s/step - loss: 0.0119 - accuracy: 1.0000 - val_loss: 0.7200 - val_accuracy: 0.8225
Epoch 57/100
16/16 [==============================] - 51s 3s/step - loss: 0.0116 - accuracy: 1.0000 - val_loss: 0.7183 - val_accuracy: 0.8225
Epoch 58/100
16/16 [==============================] - 58s 4s/step - loss: 0.0113 - accuracy: 1.0000 - val_loss: 0.7173 - val_accuracy: 0.8225
Epoch 59/100
16/16 [==============================] - 54s 3s/step - loss: 0.0109 - accuracy: 1.0000 - val_loss: 0.7163 - val_accuracy: 0.8225
Epoch 60/100
16/16 [==============================] - 47s 3s/step - loss: 0.0107 - accuracy: 1.0000 - val_loss: 0.7148 - val_accuracy: 0.8225
Epoch 61/100
16/16 [==============================] - 49s 3s/step - loss: 0.0104 - accuracy: 1.0000 - val_loss: 0.7141 - val_accuracy: 0.8225
Epoch 62/100
16/16 [==============================] - 49s 3s/step - loss: 0.0101 - accuracy: 1.0000 - val_loss: 0.7125 - val_accuracy: 0.8225
Epoch 63/100
16/16 [==============================] - 50s 3s/step - loss: 0.0098 - accuracy: 1.0000 - val_loss: 0.7112 - val_accuracy: 0.8225
Epoch 64/100
16/16 [==============================] - 67s 4s/step - loss: 0.0096 - accuracy: 1.0000 - val_loss: 0.7101 - val_accuracy: 0.8225
Epoch 65/100
16/16 [==============================] - 56s 4s/step - loss: 0.0093 - accuracy: 1.0000 - val_loss: 0.7093 - val_accuracy: 0.8225
Epoch 66/100
16/16 [==============================] - 59s 4s/step - loss: 0.0091 - accuracy: 1.0000 - val_loss: 0.7082 - val_accuracy: 0.8225
Epoch 67/100
16/16 [==============================] - 52s 3s/step - loss: 0.0089 - accuracy: 1.0000 - val_loss: 0.7072 - val_accuracy: 0.8216
Epoch 68/100
16/16 [==============================] - 49s 3s/step - loss: 0.0086 - accuracy: 1.0000 - val_loss: 0.7060 - val_accuracy: 0.8225
Epoch 69/100
16/16 [==============================] - 46s 3s/step - loss: 0.0084 - accuracy: 1.0000 - val_loss: 0.7049 - val_accuracy: 0.8225
Epoch 70/100
16/16 [==============================] - 52s 3s/step - loss: 0.0082 - accuracy: 1.0000 - val_loss: 0.7038 - val_accuracy: 0.8216
Epoch 71/100
16/16 [==============================] - 52s 3s/step - loss: 0.0080 - accuracy: 1.0000 - val_loss: 0.7031 - val_accuracy: 0.8225
Epoch 72/100
16/16 [==============================] - 50s 3s/step - loss: 0.0078 - accuracy: 1.0000 - val_loss: 0.7024 - val_accuracy: 0.8225
Epoch 73/100
16/16 [==============================] - 53s 3s/step - loss: 0.0077 - accuracy: 1.0000 - val_loss: 0.7014 - val_accuracy: 0.8216
Epoch 74/100
16/16 [==============================] - 50s 3s/step - loss: 0.0075 - accuracy: 1.0000 - val_loss: 0.7005 - val_accuracy: 0.8225
Epoch 75/100
16/16 [==============================] - 51s 3s/step - loss: 0.0073 - accuracy: 1.0000 - val_loss: 0.6995 - val_accuracy: 0.8225
Epoch 76/100
16/16 [==============================] - 53s 3s/step - loss: 0.0072 - accuracy: 1.0000 - val_loss: 0.6989 - val_accuracy: 0.8235
Epoch 77/100
16/16 [==============================] - 54s 3s/step - loss: 0.0070 - accuracy: 1.0000 - val_loss: 0.6983 - val_accuracy: 0.8225
Epoch 78/100
16/16 [==============================] - 50s 3s/step - loss: 0.0068 - accuracy: 1.0000 - val_loss: 0.6970 - val_accuracy: 0.8225
Epoch 79/100
16/16 [==============================] - 50s 3s/step - loss: 0.0067 - accuracy: 1.0000 - val_loss: 0.6965 - val_accuracy: 0.8225
Epoch 80/100
16/16 [==============================] - 49s 3s/step - loss: 0.0066 - accuracy: 1.0000 - val_loss: 0.6955 - val_accuracy: 0.8225
Epoch 81/100
16/16 [==============================] - 52s 3s/step - loss: 0.0064 - accuracy: 1.0000 - val_loss: 0.6950 - val_accuracy: 0.8245
Epoch 82/100
16/16 [==============================] - 51s 3s/step - loss: 0.0063 - accuracy: 1.0000 - val_loss: 0.6942 - val_accuracy: 0.8245
Epoch 83/100
16/16 [==============================] - 49s 3s/step - loss: 0.0062 - accuracy: 1.0000 - val_loss: 0.6935 - val_accuracy: 0.8235
Epoch 84/100
16/16 [==============================] - 51s 3s/step - loss: 0.0060 - accuracy: 1.0000 - val_loss: 0.6928 - val_accuracy: 0.8225
Epoch 85/100
16/16 [==============================] - 59s 4s/step - loss: 0.0059 - accuracy: 1.0000 - val_loss: 0.6924 - val_accuracy: 0.8235
Epoch 86/100
16/16 [==============================] - 61s 4s/step - loss: 0.0058 - accuracy: 1.0000 - val_loss: 0.6919 - val_accuracy: 0.8235
Epoch 87/100
16/16 [==============================] - 60s 4s/step - loss: 0.0057 - accuracy: 1.0000 - val_loss: 0.6911 - val_accuracy: 0.8235
Epoch 88/100
16/16 [==============================] - 61s 4s/step - loss: 0.0056 - accuracy: 1.0000 - val_loss: 0.6905 - val_accuracy: 0.8225
Epoch 89/100
16/16 [==============================] - 52s 3s/step - loss: 0.0054 - accuracy: 1.0000 - val_loss: 0.6899 - val_accuracy: 0.8225
Epoch 90/100
16/16 [==============================] - 51s 3s/step - loss: 0.0053 - accuracy: 1.0000 - val_loss: 0.6894 - val_accuracy: 0.8235
Epoch 91/100
16/16 [==============================] - 48s 3s/step - loss: 0.0052 - accuracy: 1.0000 - val_loss: 0.6885 - val_accuracy: 0.8235
Epoch 92/100
16/16 [==============================] - 50s 3s/step - loss: 0.0051 - accuracy: 1.0000 - val_loss: 0.6879 - val_accuracy: 0.8235
Epoch 93/100
16/16 [==============================] - 46s 3s/step - loss: 0.0050 - accuracy: 1.0000 - val_loss: 0.6874 - val_accuracy: 0.8225
Epoch 94/100
16/16 [==============================] - 49s 3s/step - loss: 0.0050 - accuracy: 1.0000 - val_loss: 0.6869 - val_accuracy: 0.8235
Epoch 95/100
16/16 [==============================] - 49s 3s/step - loss: 0.0049 - accuracy: 1.0000 - val_loss: 0.6863 - val_accuracy: 0.8225
Epoch 96/100
16/16 [==============================] - 48s 3s/step - loss: 0.0048 - accuracy: 1.0000 - val_loss: 0.6859 - val_accuracy: 0.8225
Epoch 97/100
16/16 [==============================] - 49s 3s/step - loss: 0.0047 - accuracy: 1.0000 - val_loss: 0.6852 - val_accuracy: 0.8225
Epoch 98/100
16/16 [==============================] - 52s 3s/step - loss: 0.0046 - accuracy: 1.0000 - val_loss: 0.6849 - val_accuracy: 0.8225
Epoch 99/100
16/16 [==============================] - 49s 3s/step - loss: 0.0045 - accuracy: 1.0000 - val_loss: 0.6845 - val_accuracy: 0.8225
Epoch 100/100
16/16 [==============================] - 51s 3s/step - loss: 0.0044 - accuracy: 1.0000 - val_loss: 0.6836 - val_accuracy: 0.8225